
Nonmonotonic Reasoning


Interpolation in Knowledge Representation

Jung, Jean Christoph, Koopmann, Patrick, Knorr, Matthias

arXiv.org Artificial Intelligence

Craig interpolation and uniform interpolation have many applications in knowledge representation, including explainability, forgetting, modularization and reuse, and even learning. At the same time, many relevant knowledge representation formalisms do not, in general, enjoy Craig or uniform interpolation, and computing interpolants in practice is challenging. We take a closer look at two prominent knowledge representation formalisms, description logics and logic programming, and discuss theoretical results and practical methods for computing interpolants.
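The core property discussed above can be illustrated in the propositional case: whenever $A \models B$, a Craig interpolant is a formula $I$ over the shared vocabulary of $A$ and $B$ with $A \models I$ and $I \models B$. A minimal brute-force sketch (the formulas $A$, $B$, $I$ below are hypothetical examples, not taken from the paper):

```python
from itertools import product

# Formulas as Python predicates over a truth assignment (dict var -> bool).
# Hypothetical example: A = p & q entails B = p | r; they share only the variable p.
A = lambda v: v["p"] and v["q"]
B = lambda v: v["p"] or v["r"]
I = lambda v: v["p"]  # candidate Craig interpolant, built from shared vocabulary only

def entails(f, g, variables):
    """Check f |= g by enumerating every truth assignment."""
    return all(g(dict(zip(variables, bits))) or not f(dict(zip(variables, bits)))
               for bits in product([False, True], repeat=len(variables)))

variables = ["p", "q", "r"]
print(entails(A, I, variables))  # True: A |= I
print(entails(I, B, variables))  # True: I |= B
```

For description logics and logic programs the analogous computation is far harder, which is exactly the practical challenge the abstract points to.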


On the Boolean Network Theory of Datalog$^\neg$

Trinh, Van-Giang, Benhamou, Belaid, Soliman, Sylvain, Fages, François

arXiv.org Artificial Intelligence

Datalog$^\neg$ is a central formalism used in a variety of domains ranging from deductive databases and abstract argumentation frameworks to answer set programming. Its model theory is the finite counterpart of the logical semantics developed for normal logic programs, mainly based on the notions of Clark's completion and two-valued or three-valued canonical models including supported, stable, regular and well-founded models. In this paper we establish a formal link between Datalog$^\neg$ and Boolean network theory first introduced for gene regulatory networks. We show that in the absence of odd cycles in a Datalog$^\neg$ program, the regular models coincide with the stable models, which entails the existence of stable models, and in the absence of even cycles, we prove the uniqueness of stable partial models and regular models. This connection also gives new upper bounds on the numbers of stable partial, regular, and stable models of a Datalog$^\neg$ program using the cardinality of a feedback vertex set in its atom dependency graph. Interestingly, our connection to Boolean network theory also points us to the notion of trap spaces. In particular we show the equivalence between subset-minimal stable trap spaces and regular models.
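The odd- and even-cycle phenomena mentioned above already show up in tiny propositional programs. The following sketch enumerates stable models via the Gelfond-Lifschitz reduct (a standard construction, not code from the paper): an even negative cycle yields two stable models, while an odd one yields none.

```python
from itertools import chain, combinations

# A normal logic program as rules (head, positive_body, negative_body).
# Even negative cycle: p :- not q.  q :- not p.   (two stable models)
even = [("p", [], ["q"]), ("q", [], ["p"])]
# Odd negative cycle:  p :- not p.                (no stable model)
odd = [("p", [], ["p"])]

def least_model(definite_rules):
    """Least model of a negation-free program, computed by fixpoint iteration."""
    m, changed = set(), True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if set(pos) <= m and head not in m:
                m.add(head)
                changed = True
    return m

def stable_models(program):
    atoms = {h for h, _, _ in program} | {a for _, p, n in program for a in p + n}
    models = []
    for bits in chain.from_iterable(combinations(sorted(atoms), r)
                                    for r in range(len(atoms) + 1)):
        candidate = set(bits)
        # Gelfond-Lifschitz reduct: drop rules whose negative body meets the
        # candidate, then delete the remaining negative literals.
        reduct = [(h, p) for h, p, n in program if not (set(n) & candidate)]
        if least_model(reduct) == candidate:
            models.append(candidate)
    return models

print(stable_models(even))  # [{'p'}, {'q'}]
print(stable_models(odd))   # []
```

The paper's Boolean-network results generalize exactly this picture: absence of odd cycles guarantees existence of stable models, absence of even cycles guarantees uniqueness.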


Temporal Many-valued Conditional Logics: a Preliminary Report

Alviano, Mario, Giordano, Laura, Dupré, Daniele Theseider

arXiv.org Artificial Intelligence

In this paper we propose a many-valued temporal conditional logic. We start from a many-valued logic with typicality and extend it with the temporal operators of Linear Temporal Logic (LTL), thus providing a formalism able to capture the dynamics of a system through strict and defeasible temporal properties. We also consider an instantiation of the formalism for gradual argumentation.


A Primer for Preferential Non-Monotonic Propositional Team Logics

Sauerwald, Kai, Kontinen, Juha

arXiv.org Artificial Intelligence

This paper considers KLM-style preferential non-monotonic reasoning in the setting of propositional team semantics. We show that team-based propositional logics naturally give rise to cumulative non-monotonic entailment relations. Motivated by the non-classical interpretation of disjunction in team semantics, we give a precise characterization for preferential models for propositional dependence logic satisfying all of System P postulates. Furthermore, we show how classical entailment and dependence logic entailment can be expressed in terms of non-trivial preferential models.
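The KLM-style preferential entailment referred to above can be made concrete with a small model: worlds carry a normality ranking, and a premise defeasibly entails a conclusion when every minimally-ranked premise-world satisfies it. A minimal sketch with a hypothetical bird/penguin ranking (not an example from the paper, which works in team semantics):

```python
# Worlds as frozensets of true atoms; lower rank = more normal (preferred).
# Hypothetical ranking: normal birds fly; penguins are abnormal birds that do not.
worlds = {
    frozenset({"bird", "flies"}): 0,
    frozenset({"bird"}): 1,
    frozenset({"bird", "penguin"}): 1,
    frozenset({"bird", "penguin", "flies"}): 2,
}

def entails_pref(premise, conclusion):
    """premise |~ conclusion: all minimally-ranked premise-worlds satisfy conclusion."""
    sat = [w for w in worlds if premise(w)]
    if not sat:
        return True
    min_rank = min(worlds[w] for w in sat)
    return all(conclusion(w) for w in sat if worlds[w] == min_rank)

bird = lambda w: "bird" in w
penguin = lambda w: "penguin" in w
flies = lambda w: "flies" in w

print(entails_pref(bird, flies))                      # True: birds normally fly
print(entails_pref(penguin, lambda w: not flies(w)))  # True: penguins normally do not
print(entails_pref(penguin, flies))                   # False: entailment is nonmonotonic
```

The paper's contribution is to show how such preferential models arise when the underlying semantics is team-based rather than world-based.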


Eliminating Unintended Stable Fixpoints for Hybrid Reasoning Systems

Killen, Spencer, You, Jia-Huai

arXiv.org Artificial Intelligence

A wide variety of nonmonotonic semantics can be expressed as approximators defined under AFT (Approximation Fixpoint Theory). Using traditional AFT, it is not possible to define approximators that rely on information computed in previous iterations of stable revision. However, this information is rich for semantics that incorporate classical negation into nonmonotonic reasoning. In this work, we introduce a methodology resembling AFT that can utilize previously computed upper bounds to capture semantics more precisely. We demonstrate our framework's applicability to hybrid MKNF (minimal knowledge and negation as failure) knowledge bases by extending the state-of-the-art approximator.


Learning Assumption-based Argumentation Frameworks

Proietti, Maurizio, Toni, Francesca

arXiv.org Artificial Intelligence

We propose a novel approach to logic-based learning which generates assumption-based argumentation (ABA) frameworks from positive and negative examples, using a given background knowledge. These ABA frameworks can be mapped onto logic programs with negation as failure that may be non-stratified. Whereas existing argumentation-based methods learn exceptions to general rules by interpreting the exceptions as rebuttal attacks, our approach interprets them as undercutting attacks. Our learning technique is based on the use of transformation rules, including some adapted from logic program transformation rules (notably folding) as well as others, such as rote learning and assumption introduction. We present a general strategy that applies the transformation rules in a suitable order to learn stratified frameworks, and we also propose a variant that handles the non-stratified case. We illustrate the benefits of our approach with a number of examples, which show that, on one hand, we are able to easily reconstruct other logic-based learning approaches and, on the other hand, we can work out in a very simple and natural way problems that seem to be hard for existing techniques.


On Trivalent Logics, Compound Conditionals, and Probabilistic Deduction Theorems

Gilio, Angelo, Over, David E., Pfeifer, Niki, Sanfilippo, Giuseppe

arXiv.org Artificial Intelligence

In this paper we recall some results for conditional events, compound conditionals, conditional random quantities, p-consistency, and p-entailment. Then, we show the equivalence between bets on conditionals and conditional bets, by reviewing de Finetti's trivalent analysis of conditionals. But our approach goes beyond de Finetti's early trivalent logical analysis and is based on his later ideas, aiming to take his proposals to a higher level. We examine two recent articles that explore trivalent logics for conditionals and their definitions of logical validity and compare them with our approach to compound conditionals. We prove a Probabilistic Deduction Theorem for conditional events. After that, we study some probabilistic deduction theorems by presenting several examples. We focus on iterated conditionals and the invalidity of the Import-Export principle in the light of our Probabilistic Deduction Theorem. We use the inference from a disjunction, "$A$ or $B$", to the conditional, "if not-$A$ then $B$", as an example to show the invalidity of the Import-Export principle. We also introduce a General Import-Export principle and we illustrate it by examining some p-valid inference rules of System P. Finally, we briefly discuss some related work relevant to AI.
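The or-to-if inference mentioned above fails probabilistically when the conditional is read as a conditional event: $P(A \lor B)$ can be high while $P(B \mid \neg A)$ is low. A quick numerical check with a hypothetical distribution (the numbers are illustrative, not from the paper):

```python
# Probability of each world (truth value of A, truth value of B).
# Hypothetical distribution: P(A or B) is high, yet P(B | not A) is low.
p = {(True, False): 0.89, (False, True): 0.01,
     (False, False): 0.10, (True, True): 0.0}

def prob(event):
    return sum(q for world, q in p.items() if event(world))

p_or = prob(lambda w: w[0] or w[1])                    # premise: A or B
p_not_a = prob(lambda w: not w[0])
p_cond = prob(lambda w: not w[0] and w[1]) / p_not_a   # conclusion: P(B | not A)

print(round(p_or, 2))    # 0.9
print(round(p_cond, 2))  # 0.09
```

So the premise is almost certain while the conditional conclusion is nearly certain to fail, which is the kind of counterexample the abstract invokes against Import-Export.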


Deontic Meta-Rules

Olivieri, Francesco, Governatori, Guido, Cristani, Matteo, Rotolo, Antonino, Sattar, Abdul

arXiv.org Artificial Intelligence

The use of meta-rules in logic, i.e., rules whose content includes other rules, has recently gained attention in the setting of non-monotonic reasoning: a first logical formalisation and efficient algorithms to compute the (meta-)extensions of such theories were proposed in Olivieri et al. (2021). This work extends that logical framework by considering the deontic aspect. The resulting logic will not just be able to model policies but also tackle well-known aspects that occur in numerous legal systems. The use of Defeasible Logic (DL) to model meta-rules in the application area we just alluded to has been investigated. Within this line of research, the study mentioned above did not focus on the general computational properties of meta-rules. This study fills this gap with two major contributions. First, we introduce and formalise two variants of Defeasible Deontic Logic with Meta-Rules to represent (1) defeasible meta-theories with deontic modalities, and (2) two different types of conflicts among rules: Simple Conflict Defeasible Deontic Logic, and Cautious Conflict Defeasible Deontic Logic. Second, we advance efficient algorithms to compute the extensions for both variants.


On Establishing Robust Consistency in Answer Set Programs

Thevapalan, Andre, Kern-Isberner, Gabriele

arXiv.org Artificial Intelligence

Answer set programs used in real-world applications often require that the program is usable with different input data. This, however, can often lead to contradictory statements and consequently to an inconsistent program. Causes for potential contradictions in a program are conflicting rules. In this paper, we show how to ensure that a program $\mathcal{P}$ remains non-contradictory given any allowed set of such input data. For that, we introduce the notion of conflict-resolving $\lambda$-extensions. A conflict-resolving $\lambda$-extension for a conflicting rule $r$ is a set $\lambda$ of (default) literals such that extending the body of $r$ by $\lambda$ resolves all conflicts of $r$ at once. We investigate the properties that suitable $\lambda$-extensions should possess and, building on that, we develop a strategy to compute all such conflict-resolving $\lambda$-extensions for each conflicting rule in $\mathcal{P}$. We show that implementing a conflict resolution process that successively resolves conflicts using $\lambda$-extensions eventually yields a program that remains non-contradictory given any allowed set of input data.
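The idea of a $\lambda$-extension can be sketched on a toy pair of conflicting rules: two rules with complementary heads conflict if some allowed set of input facts makes both applicable, and adding a default literal to one body removes every such case. The rules and input atoms below are hypothetical, not examples from the paper:

```python
from itertools import chain, combinations

# Rules as (head, positive_body, negative_body); heads 'b' and '-b' are complementary.
# Hypothetical conflicting pair over input facts drawn from {a, c}:
r1 = ("b",  ["a"], [])        # b  :- a.
r2 = ("-b", ["a", "c"], [])   # -b :- a, c.

inputs = ["a", "c"]

def applicable(rule, facts):
    head, pos, neg = rule
    return set(pos) <= facts and not (set(neg) & facts)

def conflict_free(rule_x, rule_y):
    """No allowed set of input facts makes both conflicting rules applicable."""
    subsets = chain.from_iterable(combinations(inputs, r)
                                  for r in range(len(inputs) + 1))
    return all(not (applicable(rule_x, set(s)) and applicable(rule_y, set(s)))
               for s in subsets)

print(conflict_free(r1, r2))      # False: the facts {a, c} fire both rules

# lambda-extension: add the default literal 'not c' to the body of r1.
r1_ext = ("b", ["a"], ["c"])      # b :- a, not c.
print(conflict_free(r1_ext, r2))  # True: the conflict is resolved for every input
```

The paper's strategy computes all such extensions systematically rather than checking one candidate as above.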


Tachmazidis

AAAI Conferences

We are witnessing an explosion of available data from the Web, government authorities, scientific databases, sensors and more. Such datasets could benefit from the introduction of rule sets encoding commonly accepted rules or facts, application- or domain-specific rules, commonsense knowledge etc. This raises the question of whether, how, and to what extent knowledge representation methods are capable of handling the vast amounts of data for these applications. In this paper, we consider nonmonotonic reasoning, which has traditionally focused on rich knowledge structures. In particular, we consider defeasible logic, and analyze how parallelization, using the MapReduce framework, can be used to reason with defeasible rules over huge data sets. Our experimental results demonstrate that defeasible reasoning over billions of facts is performant, and has the potential to scale to trillions of facts.
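The map/reduce decomposition described above can be sketched in miniature: the map phase keys facts by individual so that one reducer sees everything about that individual, and the reduce phase applies a defeasible rule together with its stronger exception. The facts and rules below are hypothetical illustrations, not the paper's rule sets:

```python
from collections import defaultdict

# Hypothetical fact stream: (individual, attribute) pairs, as a data-parallel
# job would see them spread across input splits.
facts = [("tweety", "bird"), ("opus", "bird"), ("opus", "penguin"), ("polly", "bird")]

def map_phase(fact):
    """Map: key each fact by individual so one reducer collects all its facts."""
    individual, attribute = fact
    return individual, attribute

def reduce_phase(individual, attributes):
    """Reduce: defeasible rule 'birds fly', defeated by the superior
    rule 'penguins do not fly'."""
    attrs = set(attributes)
    if "penguin" in attrs:   # superior rule wins the conflict
        return individual, "not_flies"
    if "bird" in attrs:      # defeasible rule applies undefeated
        return individual, "flies"
    return individual, "unknown"

# Simulate the shuffle: group mapped values by key, then reduce per key.
groups = defaultdict(list)
for fact in facts:
    key, value = map_phase(fact)
    groups[key].append(value)

conclusions = dict(reduce_phase(k, vs) for k, vs in groups.items())
print(conclusions)  # {'tweety': 'flies', 'opus': 'not_flies', 'polly': 'flies'}
```

At scale, the same per-key reduction runs independently on each partition, which is what makes the approach embarrassingly parallel for stratified defeasible theories.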